114 research outputs found

    Ceramic Thin Films with Embedded Magnetic Nanofibers or Nanorods of Controlled Orientations

    Get PDF
    Ceramic thin films embedded with oriented magnetic nanofibers or nanorods are in high demand for applications in remote sensing, electromagnetic shielding, and thermal management at high temperatures. A general strategy for developing ceramic composite thin films with aligned magnetic nanorods or nanofibers has yet to be developed. This dissertation is centered on a fundamental understanding of a sol-gel and polymer-based route toward creating ceramic thin films with aligned magnetic nanostructures. The topics cover the fabrication and properties of ceramic and ceramic-based nanofibers, precipitating magnetic nanoparticles within ceramic fibers, aligning and embedding nanofibers or nanorods within ceramic films, and preventing cracking during sol-gel dip-coating on flat substrates or on substrates with protrusions such as nanorods or nanofibers. The current status of and challenges in developing ceramic-based nanocomposites and their potential applications are reviewed in chapter I, together with the feasible methodologies and general approaches. In chapters II and III, we present the development of mullite and mullite-based composite nanofibers as potential fillers in ceramic thin films; the schemes of materials formation and the approaches for microstructure control are discussed in detail, and the mechanical and magnetic properties of the mullite-based fibers are studied. In chapter IV, the high-temperature in situ precipitation of nickel nanoparticles within a mullite fiber host is studied, to fundamentally understand the processing mechanism and its potential for high-temperature applications. In chapter V, we present a fundamental understanding of processing crack-free mullite thin films by the sol-gel method. In chapter VI, a scientific approach is described for processing macroscopic ceramic thin films embedded with magnetic nanorods of controlled alignment.
In chapter VII, the formation of ceramic thin films with embedded nanorods is studied both theoretically and experimentally, and the mechanism and criterion of microscopic cracking within the thin-film composites are discussed.
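A cracking criterion of the kind the dissertation discusses can be illustrated with the standard critical-thickness relation for drying films, in which a film thinner than a critical value cannot store enough elastic energy to drive channel cracks. The formula, the geometry factor `omega`, and the example numbers below are illustrative assumptions, not the dissertation's actual model.

```python
# Illustrative critical cracking thickness for a drying sol-gel film:
# below h_c, channel cracks cannot propagate. All values are assumptions.
def critical_thickness(k_ic, sigma, omega=1.0):
    """h_c = (K_Ic / (omega * sigma))**2, in metres.

    k_ic  : fracture toughness of the gel film (Pa * m^0.5)
    sigma : biaxial tensile drying stress (Pa)
    omega : dimensionless crack-geometry factor (~1-2)
    """
    return (k_ic / (omega * sigma)) ** 2

# Example: K_Ic = 0.3 MPa*m^0.5 and a 300 MPa drying stress give
# a critical thickness on the order of a micron.
h_c = critical_thickness(0.3e6, 300e6)
print(f"critical thickness ~ {h_c * 1e9:.0f} nm")
```

Films are kept crack-free either by staying below this thickness per coat or by reducing the drying stress, which is the practical lever in multi-layer dip-coating.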

    Personnel recognition and gait classification based on multistatic micro-doppler signatures using deep convolutional neural networks

    Get PDF
    In this letter, we propose two methods for personnel recognition and gait classification using deep convolutional neural networks (DCNNs) based on multistatic radar micro-Doppler signatures. Previous DCNN-based schemes have mainly focused on monostatic scenarios, whereas the directional diversity offered by multistatic radar is exploited in this letter to improve classification accuracy. We first propose the voted monostatic DCNN (VMo-DCNN) method, which trains DCNNs on each receiver node separately and fuses the results by binary voting. By merging the fusion step into the network architecture, we further propose the multistatic DCNN (Mul-DCNN) method, which performs slightly better than VMo-DCNN. These methods are validated on real data measured with a 2.4-GHz multistatic radar system. Experimental results show that the Mul-DCNN achieves over 99% accuracy in armed/unarmed gait classification using only 20% training data and similar performance in two-class personnel recognition using 50% training data, which are higher than the accuracies obtained by performing DCNN on a single radar node.
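The binary-voting fusion step of the VMo-DCNN approach can be sketched in a few lines: each receiver node's network casts a class vote and the majority wins. The per-node classifiers are stubbed out here; only the voting described in the abstract is shown.

```python
import numpy as np

# Majority voting over per-receiver class predictions, as in a
# VMo-DCNN-style late-fusion scheme. Each element of
# per_node_predictions is one receiver node's predicted class index.
def vote(per_node_predictions):
    counts = np.bincount(per_node_predictions)
    return int(np.argmax(counts))

# Three receivers: two predict class 1 ("armed"), one predicts class 0
print(vote([1, 1, 0]))  # majority vote -> 1
```

The Mul-DCNN variant instead moves this fusion inside the network, which is why it can slightly outperform voting on hard examples where individual nodes are uncertain.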

    Relighting4D: Neural Relightable Human from Videos

    Full text link
    Human relighting is a highly desirable yet challenging task. Existing works either require expensive one-light-at-a-time (OLAT) data captured with a light stage or cannot freely change the viewpoints of the rendered body. In this work, we propose a principled framework, Relighting4D, that enables free-viewpoint relighting from only human videos under unknown illuminations. Our key insight is that the space-time varying geometry and reflectance of the human body can be decomposed as a set of neural fields of normal, occlusion, diffuse, and specular maps. These neural fields are further integrated into reflectance-aware physically based rendering, where each vertex in the neural field absorbs and reflects the light from the environment. The whole framework can be learned from videos in a self-supervised manner, with physically informed priors designed for regularization. Extensive experiments on both real and synthetic datasets demonstrate that our framework is capable of relighting dynamic human actors from free viewpoints. Comment: ECCV 2022; Project Page https://frozenburning.github.io/projects/relighting4d Codes are available at https://github.com/FrozenBurning/Relighting4
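The role of the predicted normal, occlusion, diffuse, and specular fields in shading can be illustrated with a toy per-point model under a single directional light. This is a hedged sketch using a Blinn-Phong specular term, not the paper's physically based renderer, and all parameter names are assumptions.

```python
import numpy as np

# Toy shading of one surface point from the quantities the method
# predicts as neural fields: a normal, an occlusion value, and
# diffuse/specular coefficients.
def shade(normal, light_dir, view_dir, albedo, occlusion, k_s, shininess=32):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = albedo * occlusion * max(0.0, float(n @ l))
    h = (l + v) / np.linalg.norm(l + v)  # Blinn-Phong half vector
    specular = k_s * max(0.0, float(n @ h)) ** shininess
    return diffuse + specular

# Head-on light and view on a white, unoccluded point: full diffuse
c = shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]),
          np.array([0.0, 0.0, 1.0]), albedo=1.0, occlusion=1.0, k_s=0.0)
```

In the actual framework the light comes from an environment map rather than one direction, and the same decomposition lets the illumination be swapped at render time, which is what makes relighting possible.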

    SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections

    Full text link
    In this work, we present SceneDreamer, an unconditional generative model for unbounded 3D scenes, which synthesizes large-scale 3D landscapes from random noise. Our framework is learned from in-the-wild 2D image collections only, without any 3D annotations. At the core of SceneDreamer is a principled learning paradigm comprising 1) an efficient yet expressive 3D scene representation, 2) a generative scene parameterization, and 3) an effective renderer that can leverage the knowledge from 2D images. Our approach begins with an efficient bird's-eye-view (BEV) representation generated from simplex noise, which includes a height field for surface elevation and a semantic field for detailed scene semantics. This BEV scene representation enables 1) representing a 3D scene with quadratic complexity, 2) disentangled geometry and semantics, and 3) efficient training. Moreover, we propose a novel generative neural hash grid to parameterize the latent space based on 3D positions and scene semantics, aiming to encode generalizable features across various scenes. Lastly, a neural volumetric renderer, learned from 2D image collections through adversarial training, is employed to produce photorealistic images. Extensive experiments demonstrate the effectiveness of SceneDreamer and its superiority over state-of-the-art methods in generating vivid yet diverse unbounded 3D worlds. Comment: Project Page https://scene-dreamer.github.io/ Code https://github.com/FrozenBurning/SceneDreame
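The BEV representation described above, a 2D height field plus a co-registered semantic field, can be sketched as follows. SceneDreamer generates these from simplex noise; a smooth sinusoidal stand-in is used here to keep the example dependency-free, and the thresholds and labels are purely illustrative.

```python
import numpy as np

# Sketch of a BEV scene representation: one (H, W) height field and one
# (H, W) semantic label field over the same grid. Storing two 2D fields
# instead of a 3D voxel grid is what gives quadratic (not cubic)
# complexity in the grid resolution.
def bev_representation(size=64):
    y, x = np.mgrid[0:size, 0:size] / size
    height = 0.5 * np.sin(4 * np.pi * x) * np.cos(2 * np.pi * y) + 0.5
    semantics = np.digitize(height, [0.3, 0.7])  # 0=water, 1=land, 2=peak
    return height, semantics

h, s = bev_representation()
```

Geometry (the height field) and semantics (the label field) stay disentangled, so each can be edited or resampled independently before rendering.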

    Text2Light: Zero-Shot Text-Driven HDR Panorama Generation

    Full text link
    High-quality HDRIs (high dynamic range images), typically HDR panoramas, are one of the most popular ways to create photorealistic lighting and 360-degree reflections of 3D scenes in graphics. Given the difficulty of capturing HDRIs, a versatile and controllable generative model is highly desired, one in which novice users can intuitively control the generation process. However, existing state-of-the-art methods still struggle to synthesize high-quality panoramas for complex scenes. In this work, we propose a zero-shot text-driven framework, Text2Light, to generate 4K+ resolution HDRIs without paired training data. Given a free-form text description of the scene, we synthesize the corresponding HDRI in two dedicated steps: 1) text-driven panorama generation in low dynamic range (LDR) and low resolution, and 2) super-resolution inverse tone mapping to scale up the LDR panorama in both resolution and dynamic range. Specifically, to achieve zero-shot text-driven panorama generation, we first build dual codebooks as the discrete representation for diverse environmental textures. Then, driven by the pre-trained CLIP model, a text-conditioned global sampler learns to sample holistic semantics from the global codebook according to the input text. Furthermore, a structure-aware local sampler learns to synthesize LDR panoramas patch-by-patch, guided by the holistic semantics. To achieve super-resolution inverse tone mapping, we derive a continuous representation of 360-degree imaging from the LDR panorama as a set of structured latent codes anchored to the sphere. This continuous representation enables a versatile module to upscale the resolution and dynamic range simultaneously. Extensive experiments demonstrate the superior capability of Text2Light in generating high-quality HDR panoramas.
In addition, we show the feasibility of our work in realistic rendering and immersive VR. Comment: SIGGRAPH Asia 2022; Project Page https://frozenburning.github.io/projects/text2light/ Codes are available at https://github.com/FrozenBurning/Text2Ligh

    SparseNeRF: Distilling Depth Ranking for Few-shot Novel View Synthesis

    Full text link
    Neural Radiance Fields (NeRF) degrade significantly when only a limited number of views are available. To complement the lack of 3D information, depth-based models, such as DSNeRF and MonoSDF, explicitly assume the availability of accurate depth maps of multiple views. They linearly scale the accurate depth maps as supervision to guide the predicted depth of few-shot NeRFs. However, accurate depth maps are difficult and expensive to capture due to wide-ranging depth distances in the wild. In this work, we present a new Sparse-view NeRF (SparseNeRF) framework that exploits depth priors from real-world inaccurate observations. The inaccurate depth observations come either from pre-trained depth models or from the coarse depth maps of consumer-level depth sensors. Since coarse depth maps are not strictly scaled to the ground-truth depth maps, we propose a simple yet effective constraint, a local depth ranking method, on NeRFs such that the expected depth ranking of the NeRF is consistent with that of the coarse depth maps in local patches. To preserve the spatial continuity of the estimated depth of NeRF, we further propose a spatial continuity constraint to encourage the consistency of the expected depth continuity of NeRF with coarse depth maps. Surprisingly, with simple depth ranking constraints, SparseNeRF outperforms all state-of-the-art few-shot NeRF methods (including depth-based models) on the standard LLFF and DTU datasets. Moreover, we collect a new dataset, NVS-RGBD, that contains real-world depth maps from Azure Kinect, ZED 2, and iPhone 13 Pro. Extensive experiments on the NVS-RGBD dataset also validate the superiority and generalizability of SparseNeRF. Project page is available at https://sparsenerf.github.io/. Comment: Technical Report, Project page: https://sparsenerf.github.io
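The local depth ranking constraint lends itself to a short sketch: inside a patch, whenever the coarse depth map says point i is nearer than point j, the NeRF's expected depth should agree, enforced with a hinge penalty. The function name, the pair-sampling interface, and the margin value are assumptions, not the paper's code.

```python
# Hinge-style local depth ranking loss: penalize NeRF depth orderings
# that contradict the coarse depth map's ordering within sampled pairs.
def depth_ranking_loss(nerf_depth, coarse_depth, pairs, margin=1e-4):
    loss = 0.0
    for i, j in pairs:
        if coarse_depth[i] < coarse_depth[j]:
            loss += max(0.0, nerf_depth[i] - nerf_depth[j] + margin)
        elif coarse_depth[j] < coarse_depth[i]:
            loss += max(0.0, nerf_depth[j] - nerf_depth[i] + margin)
    return loss / max(len(pairs), 1)
```

Because only the ordering is compared, the coarse depths never need to be on the same scale as the scene, which is exactly why sensor or monocular depth can supervise the NeRF without calibration.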

    CityDreamer: Compositional Generative Model of Unbounded 3D Cities

    Full text link
    In recent years, extensive research has focused on 3D natural scene generation, but the domain of 3D city generation has not received as much exploration. This is due to the greater challenges posed by 3D city generation, mainly because humans are more sensitive to structural distortions in urban environments. Additionally, generating 3D cities is more complex than generating 3D natural scenes, since buildings, as objects of the same class, exhibit a wider range of appearances than relatively consistent objects like trees in natural scenes. To address these challenges, we propose CityDreamer, a compositional generative model designed specifically for unbounded 3D cities, which separates the generation of building instances from that of other background objects, such as roads, green spaces, and water areas, into distinct modules. Furthermore, we construct two datasets, OSM and GoogleEarth, containing a vast amount of real-world city imagery to enhance the realism of the generated 3D cities in both their layouts and appearances. Through extensive experiments, CityDreamer has proven its superiority over state-of-the-art methods in generating a wide range of lifelike 3D cities. Comment: Project page: https://haozhexie.com/project/city-dreame

    Dynamic Hand Gesture Classification Based on Multistatic Radar Micro-Doppler Signatures Using Convolutional Neural Network

    Get PDF
    We propose a novel convolutional neural network (CNN) for dynamic hand gesture classification based on multistatic radar micro-Doppler signatures. The time-frequency spectrograms of micro-Doppler signatures at all the receiver antennas are adopted as the input to the CNN, where data fusion of the different receivers is carried out at an adjustable position in the network. The optimal fusion position, the one that achieves the highest classification accuracy, is determined by a series of experiments. Experimental results on measured data show that 1) the classification accuracy using multistatic radar is significantly higher than that using monostatic radar, and 2) fusion at the middle of the CNN achieves the best classification accuracy.
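The adjustable fusion position can be made concrete with a schematic forward pass: each receiver's spectrogram flows through the first `fuse_at` layers independently, the branch outputs are concatenated at the fusion point, and the remaining layers run on the fused tensor. The layers here are stand-in functions rather than learned convolutions; the structure, not the weights, is the point.

```python
import numpy as np

# Schematic multistatic CNN with fusion at an adjustable depth.
# spectrograms : list of (H, W) arrays, one per receiver antenna
# early_layers : per-branch layer functions; the first fuse_at are used
# late_layers  : shared-trunk layer functions applied after fusion
def forward(spectrograms, early_layers, late_layers, fuse_at):
    feats = list(spectrograms)
    for layer in early_layers[:fuse_at]:
        feats = [layer(f) for f in feats]                 # per-receiver branches
    fused = np.concatenate([f.ravel() for f in feats])    # fusion point
    for layer in late_layers:
        fused = layer(fused)                              # shared trunk
    return fused
```

Sweeping `fuse_at` from the input (early fusion) to the last layer (late fusion) reproduces the kind of experiment the letter describes for locating the fusion depth with the best accuracy.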

    EVA3D: Compositional 3D Human Generation from 2D Image Collections

    Full text link
    Inverse graphics aims to recover 3D models from 2D observations. Utilizing differentiable rendering, recent 3D-aware generative models have shown impressive results for rigid object generation using 2D images. However, it remains challenging to generate articulated objects, like human bodies, due to their complexity and diversity in poses and appearances. In this work, we propose EVA3D, an unconditional 3D human generative model learned from 2D image collections only. EVA3D can sample 3D humans with detailed geometry and render high-quality images (up to 512x256) without bells and whistles (e.g. super resolution). At the core of EVA3D is a compositional human NeRF representation, which divides the human body into local parts. Each part is represented by an individual volume. This compositional representation enables 1) inherent human priors, 2) adaptive allocation of network parameters, and 3) efficient training and rendering. Moreover, to accommodate the characteristics of sparse 2D human image collections (e.g. imbalanced pose distribution), we propose a pose-guided sampling strategy for better GAN learning. Extensive experiments validate that EVA3D achieves state-of-the-art 3D human generation performance regarding both geometry and texture quality. Notably, EVA3D demonstrates great potential and scalability to "inverse-graphics" diverse human bodies with a clean framework. Comment: Project Page at https://hongfz16.github.io/projects/EVA3D.htm
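One common way to realize pose-guided sampling for an imbalanced pose distribution is inverse-frequency weighting: rare pose bins are drawn more often by weighting each bin inversely to its frequency in the image collection. The binning and weighting scheme below are an illustrative assumption, not EVA3D's exact strategy.

```python
import numpy as np

# Inverse-frequency sampling weights over pose bins: rare poses are
# upweighted so the GAN sees a more balanced pose distribution.
def pose_sampling_weights(pose_bin_counts):
    counts = np.asarray(pose_bin_counts, dtype=float)
    weights = 1.0 / np.maximum(counts, 1.0)   # guard against empty bins
    return weights / weights.sum()            # normalize to a distribution

# 1000 standing poses vs 10 sitting poses: sitting is upweighted 100x
w = pose_sampling_weights([1000, 10])
```

Balancing the pose distribution this way keeps the discriminator from trivially rejecting renders of under-represented poses during GAN training.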